AAC-Hunter: efficient algorithm for discovering aggregation algebraic constraints in relational databases
ZHANG Xiaowei, JIANG Dawei, CHEN Ke, CHEN Gang
Journal of Computer Applications    2021, 41 (3): 636-642.   DOI: 10.11772/j.issn.1001-9081.2020091473
To better maintain data integrity and help auditors find anomalous reimbursement records in relational databases, AAC-Hunter (Aggregation Algebraic Constraints Hunter), an algorithm that automatically discovers Aggregation Algebraic Constraints (AACs), was proposed. An AAC is a fuzzy constraint defined between the aggregation results of two columns in the database that holds for most, but not all, records. Firstly, joins, groupings and algebraic expressions were enumerated to generate candidate AACs. Secondly, the value-range sets of these candidate AACs were calculated. Finally, the resulting AACs were output. Because this basic procedure cannot meet the performance challenges posed by massive data, a set of heuristic rules was applied to shrink the candidate constraint space, and optimization strategies based on intermediate-result reuse and elimination of trivial candidate AACs were employed to speed up the value-range set calculation. Experimental results on the TPC-H and European Soccer datasets show that, compared with a baseline algorithm without heuristic rules or optimization strategies, AAC-Hunter reduces the constraint discovery space by 95.68% and 99.94% respectively, and shortens running time by 96.58% and 92.51% respectively. These results verify the effectiveness of AAC-Hunter and show that it can improve the efficiency and capability of auditing applications.
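The candidate-checking step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's exact procedure: the grouping key, the SUM/SUM ratio form, and the `support` and `width` parameters are all assumptions made for the sketch.

```python
from collections import defaultdict

def check_candidate_aac(rows, group_key, col_a, col_b, support=0.9, width=0.2):
    """Check one candidate AAC: for at least `support` of the groups,
    SUM(col_a)/SUM(col_b) should fall inside one narrow value-range
    interval. Returns (holds, interval)."""
    sums = defaultdict(lambda: [0.0, 0.0])
    for r in rows:
        s = sums[r[group_key]]
        s[0] += r[col_a]
        s[1] += r[col_b]
    ratios = sorted(a / b for a, b in sums.values() if b != 0)
    if not ratios:
        return False, None
    k = max(1, int(len(ratios) * support))  # groups the interval must cover
    # narrowest window covering k consecutive ratios
    best = min(range(len(ratios) - k + 1),
               key=lambda i: ratios[i + k - 1] - ratios[i])
    lo, hi = ratios[best], ratios[best + k - 1]
    return (hi - lo) <= width * max(abs(lo), 1e-9), (lo, hi)
```

A constraint like "per order, SUM(tax) is roughly 10% of SUM(price) for 90% of orders" would be accepted by this check while tolerating a minority of outlier records.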
Graph convolutional network model using neighborhood selection strategy
CHEN Kejia, YANG Zeyu, LIU Zheng, LU Hao
Journal of Computer Applications    2019, 39 (12): 3415-3419.   DOI: 10.11772/j.issn.1001-9081.2019071281
The composition of neighborhoods is crucial for spatial-domain Graph Convolutional Network (GCN) models. To address the fact that structural influence is not considered when ordering a node's neighbors in such models, a novel neighborhood selection strategy was proposed to obtain an improved GCN model. Firstly, structurally important neighbors were collected for each node and the core neighborhoods were selected hierarchically. Secondly, the features of each node and its core neighborhoods were organized into a matrix. Finally, the matrix was fed to a deep Convolutional Neural Network (CNN) for semi-supervised learning. Experimental results on the Cora, Citeseer and Pubmed citation network datasets show that the proposed model achieves better accuracy in node classification tasks than a model based on classical graph embedding and four state-of-the-art GCN models. As a spatial-domain GCN, the proposed model can be effectively applied to learning tasks on large-scale networks.
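The "collect structurally important neighbors, then stack features into a matrix" step can be sketched as below. Using node degree as the structural-importance proxy is an assumption of this sketch, not necessarily the paper's criterion, and the hierarchical selection is collapsed into a single top-k pass.

```python
import numpy as np

def build_core_neighborhood_matrix(adj, features, k):
    """For each node, pick its k most structurally important neighbours
    (degree used as a simple importance proxy) and stack the features of
    [self; core neighbours] into a fixed-size matrix for a CNN."""
    n = adj.shape[0]
    degree = adj.sum(axis=1)
    out = np.zeros((n, k + 1, features.shape[1]))
    for v in range(n):
        neigh = np.flatnonzero(adj[v])
        core = neigh[np.argsort(-degree[neigh])][:k]  # highest degree first
        out[v, 0] = features[v]
        out[v, 1:1 + len(core)] = features[core]      # zero-pad if < k
    return out
```

The fixed `(k + 1) × d` matrix per node is what makes a standard CNN applicable to the otherwise irregular graph neighborhoods.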
Automatic camera calibration method for video-based vehicle speed detection
CHEN Ke
Journal of Computer Applications    2017, 37 (8): 2307-2312.   DOI: 10.11772/j.issn.1001-9081.2017.08.2307
To address the inefficiency and inconvenience of the manual camera calibration required by existing video-based vehicle speed detection, a robust algorithm was proposed to automatically calibrate a typically configured road-monitoring camera, i.e., to recover its focal length, pitch angle and distance to the road surface. The relationship between the vanishing points formed by two groups of orthogonal lines was used to calibrate the focal length and pitch angle; the statistical width data collected from multiple vehicles detected in the video stream were then used to estimate the distance between the camera and the road surface. The experimental results show that the proposed algorithm is efficient, accurate and reliable. It can serve as an effective automatic calibration method, helping existing traffic-monitoring video devices to automatically collect, analyze and monitor vehicle speed, vehicle type and traffic flow data, and to enforce speed limits.
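The focal-length/pitch step can be sketched from the standard orthogonality constraint on vanishing points. This sketch assumes image coordinates centred on the principal point and zero roll, and takes the pitch from the road-direction vanishing point; these simplifications are assumptions of the sketch, not details confirmed by the abstract.

```python
import math

def calibrate_from_vanishing_points(vp_road, vp_cross):
    """Recover focal length and pitch from the vanishing points of two
    orthogonal horizontal directions (road direction and cross-road
    direction), coordinates centred on the principal point.
    Back-projected rays (u, v, f) of orthogonal directions satisfy
    u1*u2 + v1*v2 + f^2 = 0, hence f^2 = -(u1*u2 + v1*v2)."""
    u1, v1 = vp_road
    u2, v2 = vp_cross
    f2 = -(u1 * u2 + v1 * v2)
    if f2 <= 0:
        raise ValueError("vanishing points inconsistent with orthogonality")
    f = math.sqrt(f2)
    pitch = math.atan2(v1, f)  # angle of the road-direction ray below the axis
    return f, pitch
```

The remaining unknown, camera height / distance to the road, is then fixed by matching projected vehicle widths against their known statistical distribution, as the abstract describes.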
Algorithm design of early detection and offset polyline decoding for IRA codes
BAO Zhixiang, LYU Na, CHEN Kefan
Journal of Computer Applications    2015, 35 (6): 1541-1545.   DOI: 10.11772/j.issn.1001-9081.2015.06.1541
Decoding of Irregular Repeat Accumulate (IRA) codes usually adopts the Belief Propagation (BP) algorithm, but BP decoding requires hyperbolic tangent computations, whose high complexity makes hardware implementation difficult. A decoding algorithm combining an early-detection mechanism with an offset polyline approximation was put forward. Its performance approaches that of the BP algorithm through non-uniform error compensation applied to the polyline-approximation decoding. In addition, an early-detection method was introduced: it observes the information transmitted by check nodes in advance, identifies edges whose log-likelihood values have negligible influence on the next iteration, and removes them from the iteration, thereby reducing the computational complexity of subsequent iterations. The simulation results show that the proposed algorithm greatly reduces computational complexity by approximating the hyperbolic tangent with an offset polyline, while its decoding performance remains close to that of the BP algorithm.
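The core idea of replacing tanh with a polyline can be sketched as follows. The knot positions below are illustrative, not the paper's offsets, and the non-uniform error compensation is represented only by the non-uniform knot spacing.

```python
import math

def polyline_tanh(x, knots=(0.0, 0.5, 1.0, 1.5, 2.0, 3.0)):
    """Piecewise-linear (polyline) approximation of tanh on [0, inf),
    extended to negative x by odd symmetry; saturates at the last knot.
    Non-uniform knots put more segments where tanh curves most."""
    s = 1.0 if x >= 0 else -1.0
    ax = abs(x)
    if ax >= knots[-1]:
        return s * math.tanh(knots[-1])  # saturation region
    for k0, k1 in zip(knots, knots[1:]):
        if ax <= k1:
            y0, y1 = math.tanh(k0), math.tanh(k1)
            return s * (y0 + (y1 - y0) * (ax - k0) / (k1 - k0))
```

Each check-node update then costs only compares, multiplies and adds, which is what makes the hardware implementation tractable.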

Data quality assessment of Web article content based on simulated annealing
HAN Jingyu, CHEN Kejia
Journal of Computer Applications    2014, 34 (8): 2311-2316.   DOI: 10.11772/j.issn.1001-9081.2014.08.2311
Existing Web quality assessment approaches rely on trained models and user interaction, so they can neither meet the requirements of online response nor capture the semantics of Web content. To overcome this, a data Quality Assessment based on Simulated Annealing (QASA) method was proposed. Firstly, the relevance space of the target article was constructed by collecting topic-relevant articles on the Web, and an open information extraction scheme was employed to extract the facts of Web articles. Secondly, Simulated Annealing (SA) was employed to construct the dimension baselines of the two most important quality dimensions, namely accuracy and completeness. Finally, the quality dimensions were quantified by comparing the facts of the target article with those of the dimension baselines. The experimental results show that QASA finds near-optimal solutions within the time window while achieving accuracy comparable to, or even 10 percent higher than, related works. The QASA method can precisely assess data quality in real time, which caters to the online identification of high-quality Web articles.
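The SA search for a baseline can be sketched with a generic annealing skeleton. The energy function, neighbour move, and cooling schedule below are placeholders standing in for QASA's baseline-construction objective, which the abstract does not spell out.

```python
import math
import random

def simulated_annealing(init, energy, neighbour, t0=1.0, alpha=0.95,
                        steps=500, seed=0):
    """Generic SA skeleton: accept improving moves always, worsening
    moves with probability exp(-delta / t), while the temperature t
    cools geometrically. Returns the best state seen and its energy."""
    rng = random.Random(seed)
    state, e = init, energy(init)
    best, best_e = state, e
    t = t0
    for _ in range(steps):
        cand = neighbour(state, rng)
        ce = energy(cand)
        if ce < e or rng.random() < math.exp((e - ce) / t):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= alpha
    return best, best_e
```

In QASA's setting, a state would be a candidate set of baseline facts and the energy would score how well it anchors the accuracy or completeness dimension.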

Implementation algorithm of spherical screen projection system via internal projection
CHEN Ke, WU Jianping
Journal of Computer Applications    2014, 34 (3): 810-814.   DOI: 10.11772/j.issn.1001-9081.2014.03.0810
Addressing computer processing in internal spherical screen projection, an internal projection algorithm based on a virtual spherical transform and virtual fisheye lens mapping was proposed. To remove the output distortion caused by irregular fisheye projection, a distortion correction algorithm was presented that approximates an arbitrary fisheye mapping function with a sextic polynomial of the equal-solid-angle mapping function; the six polynomial coefficients can be obtained by solving a linear system of equations. The experimental results show that this method completely eliminates the spherical screen projection distortion. To address the change in illumination distribution introduced by spherical projection, an illumination correction algorithm based on the cosine of the projection angle was also proposed. The experimental results show that it recovers the severely altered illumination distribution to one almost identical to the original picture. The algorithm is of both theoretical and practical value for the design and software development of spherical projection systems.
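The "six coefficients from a linear system" step can be sketched as a polynomial fit. This is an illustrative reconstruction: the basis (powers 1 through 6, no constant term so that r(0) = 0) and the equisolid-angle sample mapping are assumptions consistent with, but not confirmed by, the abstract.

```python
import numpy as np

def fit_sextic_mapping(theta, r):
    """Fit r(theta) = c1*theta + c2*theta^2 + ... + c6*theta^6 to sampled
    (incidence angle, image radius) pairs of a fisheye mapping by linear
    least squares, mirroring the idea of solving one linear system for
    the six distortion-correction coefficients."""
    theta = np.asarray(theta, dtype=float)
    A = np.stack([theta ** k for k in range(1, 7)], axis=1)
    c, *_ = np.linalg.lstsq(A, np.asarray(r, dtype=float), rcond=None)
    return c

def eval_sextic(c, theta):
    """Evaluate the fitted sextic mapping at angles theta."""
    theta = np.asarray(theta, dtype=float)
    return sum(ck * theta ** (k + 1) for k, ck in enumerate(c))
```

Once the polynomial matches the actual fisheye curve, pre-warping the source image by its inverse cancels the projection distortion on the dome.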
Similar key posture transformation based on hierarchical Option for humanoid robot
KE Wende, PENG Zhiping, CHEN Ke, XIANG Shunbo
Journal of Computer Applications    2013, 33 (05): 1301-1304.   DOI: 10.3724/SP.J.1087.2013.01301
Concerning the problem that a fixed locomotion track captured from human movement cannot be used to transform between key postures of a humanoid robot, a similar key posture transformation method based on hierarchical Options was proposed. A multi-level dendrogram of key postures was constructed, and the difference between key postures was characterized in terms of similar joint difference, total similar difference at a moment, and total similar difference over a period. Hierarchical reinforcement learning with Options was then introduced, in which the sets of key postures and Option actions were constructed; the SMDP-Q method converged towards the optimal Option function through accumulated rewards based on key posture differences, and the transformations were thereby realized. Experiments show the validity of the method.
New task negotiation model of multiple mobile-robots
KE Wende, PENG Zhiping, CHEN Ke, CAI Zesu
Journal of Computer Applications    2013, 33 (02): 346-349.   DOI: 10.3724/SP.J.1087.2013.00346
Concerning the lack of mental states and task-handling capability, the poor real-time performance caused by congested bandwidth, and slow learning from negotiation history, a task negotiation model for multiple mobile robots was proposed. Firstly, the basic moving states of a robot were described. Secondly, the mental states (belief, goal, intention, knowledge update, etc.) and abilities (cooperation, capability judgment, task allocation, etc.) needed for multi-robot negotiation were defined based on the π-calculus. Thirdly, the negotiation model was constructed, covering the negotiation period, negotiation task, utility estimation, negotiation allocation protocol and learning mechanism. Finally, the validity of the model was demonstrated through robot soccer experiments.
Application of active learning to recommender system in communication network
CHEN Ke-jia, HAN Jing-yu, ZHENG Zheng-zhong, ZHANG Hai-jin
Journal of Computer Applications    2012, 32 (11): 3038-3041.   DOI: 10.3724/SP.J.1087.2012.03038
The existence of potential links in sparse networks poses a major challenge for link prediction. This paper introduced active learning into the link prediction task in order to mine the information carried by the large number of unconnected node pairs in a network. The unlabeled examples about which the system is most uncertain were selected and then labeled by users, since these examples give the system the highest information gain. The experimental results on Nodobo, a real communication network dataset, show that the proposed active learning method improves the accuracy of predicting potential contacts of communication users.
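The selection step is classic uncertainty sampling, which can be sketched in a few lines. Treating "most uncertain" as "predicted link probability closest to 0.5" is the standard reading, though the paper's exact uncertainty measure is not stated in the abstract.

```python
def select_most_uncertain(scores, budget):
    """Uncertainty sampling for link prediction: from a dict mapping
    unlabeled node pairs to predicted link probabilities, return the
    `budget` pairs whose probability is closest to 0.5 (least certain),
    to be labeled by users."""
    return sorted(scores, key=lambda pair: abs(scores[pair] - 0.5))[:budget]
```

The labeled pairs are then fed back into the predictor, so each round of user effort is spent where it is most informative.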
Analysis of single sign-on system based on SAML artifact technology
CHEN Ke, SHE Kun, HUANG Di-ming
Journal of Computer Applications    2005, 25 (11): 2574-2576.  
Single sign-on technology enables users to access multiple Web services without repeated logins, making user identity management more convenient and secure. SAML is the standard for implementing single sign-on systems: it provides a common format in which Web services exchange user identity authentication information. The implementation of SAML artifact technology in a single sign-on system was discussed, and the flow and security of such a system were analyzed.
Improved clustering algorithm based on density and grid in the presence of obstacles
YAN Xin, ZHOU Li-hua, CHEN Ke-ping, XU Guang-yi
Journal of Computer Applications    2005, 25 (08): 1818-1820.   DOI: 10.3724/SP.J.1087.2005.01818
An improved grid-diffusion clustering algorithm in the presence of obstacles, called DCellO1, was proposed. Built on a grid, it combines density-based clustering with the seed-filling algorithm from computer graphics. It can construct clusters of arbitrary shape in the presence of obstacles and obtains good results even when the objects are unevenly distributed. The experiments demonstrate the superiority and effectiveness of DCellO1.
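The grid seed-filling idea can be sketched as a flood fill over dense cells that never crosses obstacle cells. The 4-connectivity and the precomputed dense/obstacle cell sets are simplifying assumptions of this sketch; DCellO1's density thresholding and diffusion details are not reproduced here.

```python
from collections import deque

def seed_fill_clusters(dense, obstacles):
    """BFS seed-filling over dense grid cells: each connected region of
    dense cells (4-connectivity) becomes one cluster, and obstacle cells
    block the fill, so clusters never span an obstacle."""
    dense = set(dense) - set(obstacles)
    clusters, seen = [], set()
    for seed in sorted(dense):
        if seed in seen:
            continue
        comp, queue = set(), deque([seed])
        seen.add(seed)
        while queue:
            x, y = queue.popleft()
            comp.add((x, y))
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in dense and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append(comp)
    return clusters
```

Because the fill propagates only through adjacent dense cells, a wall of obstacle cells naturally splits what would otherwise be one cluster into two, which is exactly the obstacle-aware behavior the algorithm targets.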
Study on the SDTF*PDF algorithm implemented in a topic retrieval system for short Chinese passages
CHEN Ke, JIA Yan, YANG Shu-qiang, WANG Yong-heng
Journal of Computer Applications    2005, 25 (01): 14-16.   DOI: 10.3724/SP.J.1087.2005.00014
More and more information, especially text, is spreading across the Internet. To detect hot topics in large volumes of Chinese text, a term-weighting algorithm named SDTF*PDF (Short Document Term Frequency * Proportional Document Frequency) was discussed. The system implementing this topic detection algorithm draws passages from many channels, and the passages in these channels are usually short. The results show that the topic detection system based on this algorithm can accurately extract, from enormous amounts of Chinese text, the hot topics of a given period, such as a day or a week.
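The weighting can be sketched after the classic TF*PDF scheme that SDTF*PDF adapts to short documents: a term's weight sums, over channels, its normalized frequency times an exponential of the proportion of the channel's documents containing it. The normalization and the exact short-document adjustment below are illustrative assumptions, not the paper's precise formula.

```python
import math
from collections import Counter

def tfpdf_weights(channels):
    """Toy TF*PDF-style term weighting. `channels` is a list of channels,
    each a list of documents (token lists). For each channel c:
        W(t) += F_c(t) * exp(n_c(t) / N_c)
    where F_c is the L2-normalized term frequency, n_c the number of
    documents in c containing t, and N_c the document count of c."""
    weights = Counter()
    for docs in channels:
        tf, df = Counter(), Counter()
        for doc in docs:
            tf.update(doc)
            df.update(set(doc))      # count each doc once per term
        norm = math.sqrt(sum(v * v for v in tf.values())) or 1.0
        for term, f in tf.items():
            weights[term] += (f / norm) * math.exp(df[term] / len(docs))
    return weights
```

Terms that occur in a large proportion of a channel's passages are boosted exponentially, so channel-wide hot topics outrank terms that are merely frequent within a single passage.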